Concerning the poor privacy and flexibility of traditional human motion lifetime estimation, a lifetime estimation system for human motion was proposed based on analyzing the amplitude variation of WiFi Channel State Information (CSI). In this system, the continuous and complex lifetime estimation problem was transformed into a discrete and simple human motion detection problem. Firstly, the CSI was collected, and the outliers and noise were filtered out. Secondly, Principal Component Analysis (PCA) was used to reduce the dimensionality of the subcarriers, yielding the principal components and their corresponding eigenvectors. Thirdly, the variance of the principal components and the mean of the first difference of the eigenvectors were calculated, and a Back Propagation Neural Network (BPNN) model was trained with the ratio of these two parameters as the feature. Fourthly, human motion detection was performed by the trained BPNN model, and the CSI data were divided into equal-width segments once human motion was detected. Finally, after human motion detection had been performed on all CSI segments, the human motion lifetime was estimated from the number of CSI segments in which motion was detected. In a real indoor environment, the average accuracy of human motion detection reaches 97% and the error rate of the human motion lifetime is less than 10%. The experimental results show that the proposed system can effectively estimate the lifetime of human motion.
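The PCA-and-ratio feature described above can be sketched as follows. This is a minimal illustration, assuming CSI amplitudes arrive as an n_samples × n_subcarriers matrix; the function name `motion_feature` and all shapes are assumptions, not the paper's code:

```python
import numpy as np

def motion_feature(csi_amplitude):
    """Ratio feature sketch: variance of the first principal component
    over the mean absolute first difference of its eigenvector.

    csi_amplitude: (n_samples, n_subcarriers) array of CSI amplitudes.
    """
    # Center the data and compute the covariance across subcarriers
    centered = csi_amplitude - csi_amplitude.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    # Eigen-decomposition; eigh returns eigenvalues in ascending order
    eigvals, eigvecs = np.linalg.eigh(cov)
    v = eigvecs[:, -1]                     # eigenvector of the largest eigenvalue
    pc = centered @ v                      # first principal component scores
    var_pc = pc.var()                      # variance of the principal component
    diff_mean = np.abs(np.diff(v)).mean()  # mean of first difference of eigenvector
    return var_pc / diff_mean
```

In the system described above, this scalar (computed per CSI segment) would be the input feature fed to the BPNN classifier.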
Polynomial interpolation is a common approximation method in approximation theory, widely used in numerical analysis, signal processing, and so on. Traditional polynomial interpolation algorithms are mainly developed by combining numerical analysis with experimental results, lacking a unified theoretical description and a regular solution process. A unified theoretical framework for polynomial interpolation algorithms based on osculating polynomial approximation theory was proposed here. Under this framework, which consists of the number of sample points, the osculating order at each sample point and the derivative approximation rules, existing interpolation algorithms can be analyzed and new algorithms can be developed. The representation of existing mainstream interpolation algorithms in the proposed framework was analyzed, and the general process for developing new algorithms was demonstrated with a four-point, second-order osculating polynomial interpolation. Theoretical analysis and numerical experiments show that almost all mainstream polynomial interpolation algorithms are instances of osculating polynomial interpolation, and that their effect is strongly related to the number of sample points, the osculating order and the derivative approximation rules.
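As a concrete instance of the framework, plain Lagrange interpolation corresponds to the case of n sample points with osculating order zero at every point (no derivative constraints). The following sketch, with illustrative names, shows a four-point Lagrange interpolant, which reproduces any cubic exactly:

```python
import numpy as np

def lagrange_interp(xs, ys, x):
    """Four-point, zero-order osculating (plain Lagrange) interpolation.

    xs, ys: sample abscissae and ordinates; x: evaluation point(s).
    """
    x = np.asarray(x, dtype=float)
    result = np.zeros_like(x)
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        # Lagrange basis polynomial l_i(x): 1 at x_i, 0 at the other nodes
        basis = np.ones_like(x)
        for j, xj in enumerate(xs):
            if j != i:
                basis *= (x - xj) / (xi - xj)
        result += yi * basis
    return result
```

Raising the osculating order at the nodes (matching derivatives as well as values) moves from this case toward Hermite-type schemes within the same framework.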
To overcome the slow convergence of Particle Swarm Optimization (PSO) and its tendency to fall into local optima, a new PSO algorithm using opposition-based learning and adaptive escape was proposed. The proposed algorithm divided the states of population evolution into a normal state and a premature state by setting a threshold. If the population is in the normal state, the standard PSO update is adopted; otherwise, when it falls into prematurity, the opposition-based learning and adaptive escape strategy is adopted: the opposite solution of each individual's best position is generated by opposition-based learning, which increases the learning ability of the particles, enhances the ability to escape from local optima, and raises the optimization rate. Experiments were conducted on 8 classical benchmark functions. The experimental results show that the proposed algorithm has better convergence speed and precision than classical PSO variants such as the Fully Informed Particle Swarm optimization (FIPS), self-organizing Hierarchical Particle Swarm Optimizer with Time-Varying Acceleration Coefficients (HPSO-TVAC), Comprehensive Learning Particle Swarm Optimizer (CLPSO), Adaptive Particle Swarm Optimization (APSO), Double Center Particle Swarm Optimization (DCPSO) and the Particle Swarm Optimization algorithm with Fast convergence and Adaptive escape (FAPSO).
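The two update rules the algorithm alternates between can be sketched as follows. The opposition operator is the standard one from opposition-based learning; the PSO coefficients shown are common defaults, not the paper's settings, and the threshold logic for deciding prematurity is omitted:

```python
import numpy as np

def opposite(x, lower, upper):
    """Opposition-based learning: the opposite of x in the box
    [lower, upper] is lower + upper - x. Applied to personal-best
    positions when the swarm is judged premature."""
    return lower + upper - x

def pso_step(x, v, pbest, gbest, w=0.72, c1=1.49, c2=1.49, rng=None):
    """One standard PSO velocity/position update (normal state).
    Coefficients are common defaults, assumed for illustration."""
    rng = np.random.default_rng() if rng is None else rng
    r1 = rng.random(x.shape)
    r2 = rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v
```

In the premature state, candidate positions generated by `opposite` compete with the current personal bests, giving trapped particles a chance to jump to the mirrored region of the search space.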
Concerning that the original Surface Variation based Local Outlier Factor (SVLOF) cannot filter out outliers on the edges or corners of a three-dimensional solid, a new near-outlier detection algorithm for scattered point clouds was proposed. The algorithm first defined SVLOF on a k-neighborhood-like region, expanding the definition of SVLOF. The expanded SVLOF can filter outliers not only on smooth surfaces but also on the edges and corners of a three-dimensional solid, while still retaining enough threshold space of the original SVLOF. The experimental results on both simulated and measured data show that the new algorithm can detect near outliers of scattered point clouds effectively without an obvious loss of efficiency.
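To illustrate the k-neighborhood idea that SVLOF-style scores build on, the sketch below computes a generic neighborhood-based outlier score (mean distance to the k nearest neighbors). This is not SVLOF itself, which additionally measures surface variation; it only shows the shared structure of scoring each point against its k-neighborhood:

```python
import numpy as np

def knn_outlier_score(points, k=8):
    """Mean distance to the k nearest neighbours of each point,
    used as a simple outlier score for a scattered point cloud.
    A generic sketch, not the paper's SVLOF definition.

    points: (n, 3) array of 3D coordinates.
    """
    # Pairwise Euclidean distance matrix (fine for small clouds)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)            # exclude each point from its own neighbourhood
    nearest = np.sort(d, axis=1)[:, :k]    # k smallest distances per point
    return nearest.mean(axis=1)
```

Points whose score exceeds a threshold would be flagged; SVLOF replaces the raw distance with a surface-variation measure so that points near sharp edges are not misclassified.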
Domain concepts in software engineering are complex and various, and the development of these concepts is hard to capture, which makes them difficult for students to understand and remember. An effective method that extracts the historical evolution information of software engineering concepts was proposed. Firstly, candidate sets of entities and entity relationships were extracted from Wikipedia with Natural Language Processing (NLP) and information extraction technology. Secondly, the entity relationships closest to the historical evolution were extracted from the candidate sets using TextRank. Finally, the knowledge base was constructed from quintuples composed of neighboring time entities and concept entities together with the key entity relationships. In the information extraction process, the TextRank algorithm was improved with text semantic features to increase the accuracy. The results verify the effectiveness of the proposed algorithm, and show that the knowledge base can organize the concepts of the software engineering field in time-sequence order.
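The core of TextRank is a PageRank-style power iteration over a similarity graph. The sketch below shows that core on a precomputed sentence-similarity matrix; the paper's improvement additionally weights edges with text semantic features, which is omitted here, and the damping factor 0.85 is the conventional default:

```python
import numpy as np

def textrank(sim, d=0.85, iters=50):
    """PageRank-style TextRank over a square, nonnegative
    similarity matrix `sim` (sim[i, j] = similarity of items i, j)."""
    n = sim.shape[0]
    W = sim.astype(float).copy()
    np.fill_diagonal(W, 0.0)          # no self-links
    col = W.sum(axis=0)
    col[col == 0] = 1.0               # avoid division by zero for isolated nodes
    M = W / col                       # column-normalized transition matrix
    r = np.full(n, 1.0 / n)           # uniform initial ranks
    for _ in range(iters):
        r = (1 - d) / n + d * (M @ r) # damped power iteration
    return r
```

The highest-ranked items are then taken as the relationships closest to the historical evolution.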
A bridge crack measurement system based on binocular stereo vision was proposed considering the low efficiency, high cost and low precision of existing bridge crack measurement at home and abroad. The system calculates the width and length of bridge cracks by binocular stereo vision methods including camera calibration, image matching and three-dimensional coordinate reconstruction. The results measured by the binocular vision system and by a monocular vision system under the same conditions were compared: the binocular vision measurement system kept the width relative error steadily within 10% and the length relative error within 1%, while the monocular results varied widely with viewing angle, with a maximum width relative error of 19.41% and a maximum length relative error of 54.35%. The bridge crack measurement system based on binocular stereo vision works well in practice, with stronger robustness and higher precision.
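The geometric step behind the three-dimensional reconstruction can be sketched with the standard rectified-stereo relations; the numbers used in the comments are illustrative assumptions, not the paper's calibration values:

```python
def stereo_depth(f_px, baseline_m, disparity_px):
    """Depth from a rectified stereo pair: Z = f * B / d.
    f_px: focal length in pixels; baseline_m: camera baseline in metres;
    disparity_px: horizontal disparity of a matched point in pixels."""
    return f_px * baseline_m / disparity_px

def metric_width(width_px, depth_m, f_px):
    """Back-project a crack width measured in pixels on the image
    plane to a metric width at the reconstructed depth."""
    return width_px * depth_m / f_px
```

For example, with an assumed focal length of 1000 px and a 0.2 m baseline, a 50 px disparity places the crack at 4 m, and a 5 px image width then corresponds to a 20 mm physical width. This is why binocular measurement is stable across viewing angles: the depth is recovered per point rather than assumed, as a monocular setup must do.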